Learning Beyond Experience: Generalizing to Unseen State Space with Reservoir Computing
Norton, Declan A., Zhang, Yuanzhao, Girvan, Michelle
Machine learning techniques offer an effective approach to modeling dynamical systems solely from observed data. However, without explicit structural priors -- built-in assumptions about the underlying dynamics -- these techniques typically struggle to generalize to aspects of the dynamics that are poorly represented in the training data. Here, we demonstrate that reservoir computing -- a simple, efficient, and versatile machine learning framework often used for data-driven modeling of dynamical systems -- can generalize to unexplored regions of state space without explicit structural priors. First, we describe a multiple-trajectory training scheme for reservoir computers (RCs) that supports training across a collection of disjoint time series, enabling effective use of available training data. Then, applying this training scheme to multistable dynamical systems, we show that RCs trained on trajectories from a single basin of attraction can achieve out-of-domain generalization by capturing system behavior in entirely unobserved basins.
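The multiple-trajectory training scheme described above can be sketched in a few lines: the key point is that each disjoint time series restarts the reservoir from rest, while the ridge-regression normal equations are accumulated across all trajectories. This is a minimal illustration, not the authors' implementation; the reservoir size, spectral radius, regularization, and washout length are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_inputs, n_res=200, spectral_radius=0.9, input_scale=0.5):
    # random recurrent weights rescaled to the desired spectral radius
    W = rng.normal(size=(n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-input_scale, input_scale, size=(n_res, n_inputs))
    return W, W_in

def run_reservoir(W, W_in, inputs):
    # drive the reservoir from rest and record its state sequence
    r = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        r = np.tanh(W @ r + W_in @ u)
        states.append(r.copy())
    return np.array(states)

def train_multi_trajectory(W, W_in, trajectories, reg=1e-6, washout=10):
    # Accumulate the ridge-regression normal equations across disjoint
    # time series; restarting from rest for each trajectory avoids
    # fabricating dynamics that bridge the gaps between series.
    n_res = W.shape[0]
    RtR = np.zeros((n_res, n_res))
    RtY = 0.0
    for traj in trajectories:
        states = run_reservoir(W, W_in, traj[:-1])[washout:]
        targets = traj[1 + washout:]           # one-step-ahead targets
        RtR += states.T @ states
        RtY = RtY + states.T @ targets
    return np.linalg.solve(RtR + reg * np.eye(n_res), RtY)
```

Prediction then proceeds in the usual RC way: feed `states @ W_out` back as the next input to run the trained model autonomously.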
- North America > United States > Maryland > Prince George's County > College Park (0.14)
- North America > United States > New Mexico > Santa Fe County > Santa Fe (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Netherlands > South Holland > Dordrecht (0.04)
- Energy (0.93)
- Government > Regional Government > North America Government > United States Government (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.67)
Deep Time Warping for Multiple Time Series Alignment
Nourbakhsh, Alireza, Mohammadzade, Hoda
Time Series Alignment is a critical task in signal processing with numerous real-world applications. In practice, signals often exhibit temporal shifts and scaling, making classification on raw data prone to errors. This paper introduces a novel approach to Multiple Time Series Alignment (MTSA) that leverages deep learning techniques. While most existing methods primarily address Multiple Sequence Alignment (MSA) for protein and DNA sequences, there remains a significant gap in alignment methodologies for numerical time series. Additionally, conventional approaches typically focus on pairwise alignment, whereas our proposed method aligns all signals jointly, so that every signal is aligned together at once. This not only enhances alignment quality but also significantly improves computational speed. By decomposing the warping function into piece-wise linear sections, we can introduce varying levels of complexity into it. Our method also ensures that the three standard warping constraints are satisfied: the boundary, monotonicity, and continuity conditions. The use of a deep convolutional network allows us to employ a new loss function that addresses some limitations of Dynamic Time Warping (DTW). Experimental results on the UCR Archive 2018, comprising 129 time series datasets, demonstrate that aligning signals with our approach significantly improves classification accuracy and warping average, and also reduces run time on the majority of these datasets.
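The three warping constraints named in the abstract (boundary, monotonicity, continuity) can be enforced by construction for a piece-wise linear warping function. The sketch below illustrates that construction only and is not the paper's architecture: `params` stands in for the output head of a network. Positive slopes give monotonicity, rescaling the cumulative sum pins the endpoints (boundary), and linear interpolation between knots gives continuity.

```python
import numpy as np

def piecewise_linear_warp(params, length):
    # `params` are unconstrained per-piece slopes (assumed to come from
    # a network in the actual method; here they are free parameters)
    slopes = np.log1p(np.exp(params))       # softplus => positive => monotonicity
    knots_y = np.concatenate(([0.0], np.cumsum(slopes)))
    knots_y *= (length - 1) / knots_y[-1]   # endpoints pinned => boundary condition
    knots_x = np.linspace(0, length - 1, len(knots_y))
    t = np.arange(length)
    return np.interp(t, knots_x, knots_y)   # piece-wise linear => continuity

def apply_warp(signal, warp):
    # resample the signal at the (continuous) warped time indices
    return np.interp(warp, np.arange(len(signal)), signal)
```

With equal slopes the warp reduces to the identity; unequal slopes locally stretch or compress time while all three constraints hold by construction.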
Tailored Forecasting from Short Time Series via Meta-learning
Norton, Declan A., Ott, Edward, Pomerance, Andrew, Hunt, Brian, Girvan, Michelle
Machine learning (ML) models can be effective for forecasting the dynamics of unknown systems from time-series data, but they often require large amounts of data and struggle to generalize across systems with varying dynamics. Combined, these issues make forecasting from short time series particularly challenging. To address this problem, we introduce Meta-learning for Tailored Forecasting from Related Time Series (METAFORS), which uses related systems with longer time-series data to supplement limited data from the system of interest. By leveraging a library of models trained on related systems, METAFORS builds tailored models to forecast system evolution with limited data. Using a reservoir computing implementation and testing on simulated chaotic systems, we demonstrate METAFORS' ability to predict both short-term dynamics and long-term statistics, even when test and related systems exhibit significantly different behaviors and the available data are scarce, highlighting its robustness and versatility in data-limited scenarios.
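As a rough illustration of the library idea, the sketch below trains simple linear autoregressive models (a deliberate stand-in for the reservoir computers used by METAFORS) on longer series from related systems, then selects a model for a short test series via a crude nearest-signature lookup. The real method learns the mapping from short series to tailored model parameters, so everything here -- `series_signature`, the AR stand-in, the lookup -- is an assumption for exposition only.

```python
import numpy as np

def series_signature(series, k=8):
    # crude fixed-length signature of a short series (the actual method
    # learns this mapping; simple statistics are used here instead)
    s = np.asarray(series, float)
    return np.concatenate([[s.mean(), s.std()], np.diff(s)[:k - 2]])

class ForecastLibrary:
    """Library of forecasters trained on long series from related systems."""
    def __init__(self):
        self.entries = []  # (signature, fitted AR coefficients)

    def add(self, long_series, order=3):
        s = np.asarray(long_series, float)
        # least-squares fit of a linear autoregressive model
        X = np.stack([s[i:len(s) - order + i] for i in range(order)], axis=1)
        y = s[order:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        self.entries.append((series_signature(s[:20]), coef))

    def tailor(self, short_series):
        # pick the library model whose signature best matches the
        # short test series
        sig = series_signature(short_series)
        dists = [np.linalg.norm(sig - e[0]) for e in self.entries]
        return self.entries[int(np.argmin(dists))][1]

def forecast(coef, history, steps):
    # iterate the selected model forward from the short history
    h = list(np.asarray(history, float)[-len(coef):])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coef, h))
        out.append(nxt)
        h = h[1:] + [nxt]
    return out
```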
- Europe (0.67)
- North America > United States > Maryland (0.28)
- Health & Medicine (1.00)
- Energy > Oil & Gas (0.67)
- Government > Regional Government > North America Government > United States Government (0.67)
Proposal and Verification of Novel Machine Learning on Classification Problems
Dozono, Chikako, Aragaki, Mina, Hebishima, Hana, Inage, Shin-ichi
This paper proposes a new machine learning method for classification problems. The classification problem has a wide range of applications, and many approaches exist, such as decision trees, neural networks, and Bayesian networks. In this paper, we focus on the action of neurons in the brain, especially the EPSP/IPSP cancellation between excitatory and inhibitory synapses, and propose a machine learning method that does not belong to any conventional family. The key feature is to consider a single neuron and give it a multivariable input Xj (j = 1, 2, ...) and its function value F(Xj) as data to the input layer. Each variable node of the multivariable input layer is linked to the processing neuron by two lines: one is called an EPSP edge and the other an IPSP edge, and a parameter Δj common to each pair of edges is introduced. The processing neuron is divided into a front part and a back part. The front part defines a pulse of width 2Δj and height 1 centered on an input Xj; the back part defines a pulse of width 2Δj centered on the input Xj with height F(Xj), based on the value F(Xj) obtained from the input layer. This information is defined as belonging to group i. In other words, group i has a width of 2Δj centered on the input Xj, is defined on a region of height F(Xj), and all outputs of Xi within the variable range are F(Xi). These groups are learned and stored using the teaching signals, and the output for a test signal is predicted according to which group the test signal belongs to. The parameter Δj is optimized so that prediction accuracy is maximized. The proposed method was applied to the flower species classification problem of Iris, the rank classification problem of used cars, and the ring classification problem of abalone, and the results were compared with those of neural networks.
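Stripped to one variable, the group/pulse mechanism reduces to: each training input X_j owns a pulse of half-width Δ centered on it, a test point is assigned to the group whose pulse it falls in, and Δ is tuned to maximize prediction accuracy. The sketch below is a simplified one-dimensional reading of the abstract with a single shared Δ, not the authors' full multivariable formulation.

```python
import numpy as np

def predict(x, X_train, F_train, delta):
    # a test point belongs to group j when it lies inside the pulse of
    # half-width delta centered on training input X_j; the group's stored
    # value F(X_j) is returned (nearest center wins), else no prediction
    d = np.abs(X_train - x)
    j = int(np.argmin(d))
    return F_train[j] if d[j] <= delta else None

def fit_delta(X_train, F_train, X_val, F_val, candidates):
    # optimize the pulse half-width so that prediction accuracy is maximal
    def accuracy(delta):
        preds = [predict(x, X_train, F_train, delta) for x in X_val]
        return np.mean([p == f for p, f in zip(preds, F_val)])
    return max(candidates, key=accuracy)
```

Too small a Δ leaves test points outside every pulse (no prediction), while too large a Δ merges neighboring groups; the validation search picks the balance.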
- Asia > Japan > Kyūshū & Okinawa > Kyūshū > Fukuoka Prefecture > Fukuoka (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Europe > Middle East > Republic of Türkiye > Istanbul Province > Istanbul (0.04)
- Asia > Middle East > Republic of Türkiye > Istanbul Province > Istanbul (0.04)
Learning Bayes' theorem with a neural network for gravitational-wave inference
Chua, Alvin J. K., Vallisneri, Michele
In the Bayesian analysis of signals immersed in noise [1], we seek a representation for the posterior probability of one or more parameters that govern the shape of the signals. Unless the parameter-to-signal map (the forward model) is very simple, the analysis (or inverse solution) comes at significant computational cost, as it requires the stochastic exploration of the likelihood surface at a large number of locations in parameter space. Such is the case, for instance, of parameter estimation for gravitational-wave sources such as the compact binaries detected by LIGO-Virgo [2, 3]; here each likelihood evaluation requires that we generate the gravitational waveform corresponding to a set of source parameters, and compute its noise-weighted correlation with detector data [4]. Waveform generation is usually the costlier operation, so gravitational-wave analysts often utilize faster, less accurate waveform models [5, 6], or accelerated surrogates of slower, more accurate models [7]. Extending the analysis from the data we have to the data we might measure (i.e., characterizing the parameter-estimation prospects of future experiments) compounds the expense, since we need to explore posteriors for many noise realizations, and across the domain of possible source parameters. For concreteness, we price the evaluation of a single Bayesian posterior at ~10^6 times the cost of generating a waveform, and the characterization of parameter-estimation prospects at ~10^6 times the cost of a posterior. With current computational resources, this means that (for instance) accurate component-mass estimates only become available hours or days after the detection of a binary black-hole coalescence [8, 9], while any extensive study of parameter-estimation prospects must rely on less reliable techniques such as the Fisher-matrix approximation [10].
In this Letter, we show how one- or two-dimensional marginalized Bayesian posteriors may be produced using deep neural networks [11] trained on large ensembles of signal + noise data streams.
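The ensemble-training idea can be illustrated with a toy version: draw source parameters, generate signal-plus-noise data streams from a cheap forward model, and train a network whose per-bin softmax output approximates the marginalized posterior over a discretized parameter. The toy below substitutes a sinusoid for the waveform model and plain softmax regression for the deep networks of the Letter; both substitutions are assumptions made for brevity, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model (a stand-in for a waveform generator): a sinusoid
# whose frequency is the parameter we want to infer.
def waveform(freq, t):
    return np.sin(2 * np.pi * freq * t)

t = np.linspace(0, 1, 64)
bins = np.linspace(1.0, 5.0, 9)          # 8 bins over the parameter range

def make_batch(n):
    # ensemble of signal-plus-noise data streams with known parameters
    freqs = rng.uniform(1.0, 5.0, n)
    data = waveform(freqs[:, None], t) + 0.5 * rng.normal(size=(n, len(t)))
    labels = np.clip(np.digitize(freqs, bins) - 1, 0, len(bins) - 2)
    return data, labels

# Softmax regression trained on the ensembles: the per-bin output
# approximates the marginalized posterior p(frequency bin | data).
W = np.zeros((len(t), len(bins) - 1))
for _ in range(300):
    X, y = make_batch(256)
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = X.T @ (p - np.eye(len(bins) - 1)[y]) / len(y)
    W -= 0.5 * grad                      # plain gradient descent step

def posterior(data):
    # normalized per-bin probabilities for one observed data stream
    logits = data @ W
    p = np.exp(logits - logits.max())
    return p / p.sum()
```

The appeal of the approach is amortization: once trained on the ensemble, each posterior evaluation costs one cheap forward pass instead of a full stochastic exploration of the likelihood surface.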
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California > Los Angeles County > Pasadena (0.04)
- Asia > China (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.64)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.50)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.49)